
    Strong Selection Significantly Increases Epistatic Interactions in the Long-Term Evolution of a Protein

    Epistatic interactions between residues determine a protein's adaptability and shape its evolutionary trajectory. When a protein experiences a changed environment, it is under strong selection to find a peak in the new fitness landscape. It has been shown that strong selection increases epistatic interactions as well as the ruggedness of the fitness landscape, but little is known about how epistatic interactions change under selection in the long-term evolution of a protein. Here we analyze the evolution of epistasis in the protease of the human immunodeficiency virus type 1 (HIV-1) using protease sequences collected for almost a decade from both treated and untreated patients, to understand how epistasis changes and how those changes impact the long-term evolvability of a protein. We use an information-theoretic proxy for epistasis that quantifies the co-variation between sites, and show that positive information is a necessary (but not sufficient) condition for detecting epistasis in most cases. We analyze the "fossils" of the evolutionary trajectories of the protein contained in the sequence data, and show that epistasis continues to be enriched under strong selection, but not for proteins whose environment is unchanged. The increase in epistasis compensates for the information loss due to sequence variability brought about by treatment, and facilitates adaptation in the increasingly rugged fitness landscape of treatment. While epistasis is thought to enhance evolvability via valley-crossing early on in adaptation, it can hinder adaptation later when the landscape has turned rugged. However, we find no evidence that the HIV-1 protease has reached its potential for evolution after 9 years of adapting to a drug environment that itself is constantly changing.
    Comment: 25 pages, 9 figures, plus Supplementary Material including Supplementary Text S1-S7, Supplementary Tables S1-S2, and Supplementary Figures S1-S2. Version that appears in PLoS Genetics.
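    The information-theoretic proxy described above amounts to measuring the co-variation (mutual information) between pairs of alignment columns. The sketch below illustrates that kind of calculation; it is not the authors' code, and the toy alignment and column handling are purely illustrative.

```python
from collections import Counter
from itertools import combinations
from math import log2

def entropy(symbols):
    """Shannon entropy (in bits) of a list of symbols."""
    n = len(symbols)
    return -sum((c / n) * log2(c / n) for c in Counter(symbols).values())

def mutual_information(seqs, i, j):
    """I(i; j) = H(i) + H(j) - H(i, j) for two alignment columns."""
    ci = [s[i] for s in seqs]
    cj = [s[j] for s in seqs]
    return entropy(ci) + entropy(cj) - entropy(list(zip(ci, cj)))

# Toy alignment standing in for patient-derived HIV-1 protease sequences.
alignment = ["PQITLWQR", "PQITLWQR", "PKVTLWQR", "PKVTLWQR"]

# Report column pairs with positive information, the paper's necessary
# (but not sufficient) signature of epistasis.
for i, j in combinations(range(len(alignment[0])), 2):
    mi = mutual_information(alignment, i, j)
    if mi > 0:
        print(f"columns {i} and {j}: MI = {mi:.3f} bits")
```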

    $1.00 per RT #BostonMarathon #PrayForBoston: analyzing fake content on Twitter

    Online social media has emerged as one of the prominent channels for the dissemination of information during real-world events. Malicious content posted online during such events can result in damage, chaos, and monetary losses in the real world. We analyzed one such medium, Twitter, for content generated during the Boston Marathon blasts that occurred on April 15, 2013. A large amount of fake content and many malicious profiles originated on the Twitter network during this event. The aim of this work is to perform an in-depth characterization of the factors that influenced malicious content and profiles becoming viral. Our results showed that 29% of the most viral content on Twitter during the Boston crisis was rumors and fake content, while 51% was generic opinions and comments, and the rest was true information. We found that a large number of users with high social reputation and verified accounts were responsible for spreading the fake content. Next, we used a regression prediction model to verify that the overall impact of all users who propagate a piece of fake content at a given time can be used to estimate the growth of that content in the future. Many malicious accounts created on Twitter during the Boston event were later suspended by Twitter. We identified over six thousand such user profiles and observed that the creation of such profiles surged considerably right after the blasts occurred. We also identified closed community structures and star formations in the interaction network of these suspended profiles.
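    As a rough illustration of the regression-based growth estimate described above, the sketch below fits a linear regression from the aggregate impact of a rumor's current propagators to its later spread. This is not the paper's model; the feature definitions and numbers are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical per-rumor snapshots: aggregate "impact" of the users who have
# propagated the rumor up to time t (summed follower counts, number of
# verified accounts, mean social reputation) and the spread observed later.
X = np.array([
    [1.2e5, 3, 0.4],
    [8.0e4, 1, 0.2],
    [5.5e5, 9, 0.7],
    [2.1e5, 4, 0.5],
])
y = np.array([950, 300, 4100, 1600])  # retweets accumulated in the next hour

model = LinearRegression().fit(X, y)
print("R^2 on training data:", model.score(X, y))

# Estimate future growth for a new rumor from its current propagators.
print("predicted growth:", model.predict([[3.0e5, 5, 0.6]])[0])
```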

    Characterizing Pedophile Conversations on the Internet using Online Grooming

    Cyber-crimes targeting children, such as online pedophile activity, are a major and growing concern to society. A deep understanding of predatory chat conversations on the Internet has implications for designing effective solutions to automatically distinguish malicious conversations from regular ones. We believe that a deeper understanding of pedophile conversations can result in more sophisticated and robust surveillance systems than the majority of current systems, which rely only on shallow processing such as simple word counting or keyword spotting. In this paper, we study pedophile conversations from the perspective of online grooming theory and perform a series of linguistics-based empirical analyses on several pedophile chat conversations to gain useful insights and patterns. We manually annotated 75 pedophile chat conversations with six stages of online grooming and tested several hypotheses on them. The results of our experiments reveal that relationship forming is the most dominant online grooming stage, in contrast to the sexual stage. We use a widely used word-counting program (LIWC) to create psycho-linguistic profiles for each of the six online grooming stages and to discover textual patterns useful for improving our understanding of the online pedophile phenomenon. Furthermore, we present empirical results that throw light on various aspects of a pedophile conversation, such as the probability of transitions from one stage to another, the distribution of a pedophile chat conversation across the various online grooming stages, and correlations between pre-defined word categories and online grooming stages.
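    The stage-transition analysis mentioned above can be illustrated by estimating a simple transition matrix from annotated stage sequences. The sketch below is not the authors' pipeline; the stage labels and example conversations are placeholders.

```python
from collections import defaultdict

# Hypothetical annotations: each conversation is a sequence of grooming-stage
# labels (standing in for the six stages used in the paper).
conversations = [
    ["relationship", "relationship", "risk_assessment", "sexual"],
    ["relationship", "exclusivity", "relationship", "sexual"],
]

# Count observed stage-to-stage transitions.
counts = defaultdict(lambda: defaultdict(int))
for conv in conversations:
    for a, b in zip(conv, conv[1:]):
        counts[a][b] += 1

# Normalise each row to get P(next stage | current stage).
for a, row in counts.items():
    total = sum(row.values())
    for b, c in row.items():
        print(f"P({b} | {a}) = {c / total:.2f}")
```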

    Informal Risk Sharing within Castes in India

    Master's thesis, Master of Social Science.

    Analyzing Big Data and Business Intelligence

    Business intelligence and analytics (BI&A) has emerged as a prominent area of study for both practitioners and researchers, reflecting the magnitude and impact of data-related problems to be solved in contemporary business organizations. This introduction to the MIS Quarterly Special Issue on Business Intelligence Research first provides a framework that identifies the evolution, applications, and emerging research areas of BI&A. BI&A 1.0, BI&A 2.0, and BI&A 3.0 are defined and described in terms of their key characteristics and capabilities. Current research in BI&A is analyzed, and challenges and opportunities associated with BI&A research and education are identified. This paper also reports a bibliometric study of critical BI&A publications, researchers, and research topics based on more than a decade of related academic and industry publications. Finally, the six articles that comprise this special issue are introduced and characterized in terms of the proposed BI&A research framework.

    Creating Tallgrass Prairie Corridors for Species at Risk Using Geographic Information Systems in Norfolk County, ON

    Anthropogenic land use in Norfolk County, Southwestern Ontario, has resulted in the fragmentation of the tallgrass prairie habitats on which several species at risk depend. This research aims to create connectivity between fragmented habitats through the development of tallgrass prairie ecological corridors in Norfolk County. Using Geographic Information Systems-based Multi-Criteria Evaluation, attribute layers were weighted according to their relative importance and combined. Five models were developed to represent the varying habitat requirements of ten at-risk species. The most suitable values in each model were combined to create a single habitat index map illustrating the best suitability for all species considered in the study. The habitat index map forms the cost surface used to perform a least-cost path analysis, which identifies the optimal corridor connecting core areas. Ideal lands to acquire for corridor development are low cost, distant from urban built-up areas, located within natural landscapes, and connected to large reserve patches.
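    The weighted multi-criteria overlay and least-cost path step can be sketched as follows; this is not the authors' workflow, and the criterion rasters, weights, and use of scikit-image's route_through_array are assumptions made for illustration.

```python
import numpy as np
from skimage import graph

# Hypothetical criterion rasters, each scaled 0-1 (1 = most suitable):
# distance from urban areas, naturalness of land cover, proximity to reserves.
rng = np.random.default_rng(0)
urban_dist = rng.random((50, 50))
naturalness = rng.random((50, 50))
reserve_prox = rng.random((50, 50))

# Weighted multi-criteria overlay; the weights are illustrative only.
suitability = 0.4 * urban_dist + 0.35 * naturalness + 0.25 * reserve_prox

# Least-cost path needs a cost surface: invert suitability so better cells
# are cheaper to cross (small epsilon keeps costs strictly positive).
cost = 1.0 - suitability + 1e-6

start, end = (2, 3), (45, 40)  # cell indices of two core habitat patches
path, total_cost = graph.route_through_array(cost, start, end,
                                             fully_connected=True,
                                             geometric=True)
print(f"corridor crosses {len(path)} cells, total cost {total_cost:.2f}")
```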

    Artificial Neural Network for Predicting the Success Rate before Graft Transplant

    Artificial learning models such as the artificial neural network, the radial basis function network, and ARTMAP have shown promising applications in the medical industry. The present work is a comparative analysis of the above-mentioned models. The results of the investigation indicated that, among the artificial neural network, the radial basis function network, and ARTMAP, the numeric values obtained from the artificial neural network were comparatively better. The accuracies of the three selected algorithms were found to be 98.9708%, 97.2556%, and 58.1475%, respectively. According to the literature survey performed, it is evident that studies in this regard have received relatively little attention, especially in India. Based on these findings, it appears that the artificial neural network could be the best model to predict graft survival during liver transplantation.
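    As an illustration of the kind of neural-network classifier compared in this work, the sketch below trains a small multilayer perceptron on synthetic donor/recipient features; the features, data, and scikit-learn model are assumptions, not the study's implementation.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for donor/recipient features (e.g. age, MELD score,
# cold ischemia time, donor BMI); label 1 = graft survived, 0 = graft failed.
rng = np.random.default_rng(42)
X = rng.random((200, 4))
y = (X[:, 0] + X[:, 2] < 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Small feed-forward network; hyperparameters are illustrative only.
clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=1000, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```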

    Secure platforms for enforcing contextual access control

    Advances in technology and the wide-scale deployment of networking-enabled portable devices such as smartphones have made it possible to provide pervasive access to sensitive data to authorized individuals from any location. While this has certainly made data more accessible, it has also increased the risk of data theft, as the data may be accessed from potentially unsafe locations in the presence of untrusted parties. Smartphones come with various embedded sensors that can provide rich contextual information, such as sensing the presence of other users in a context. Frequent context profiling can also allow a mobile device to learn its surroundings and infer the familiarity and safety of a context. This can be used to further strengthen the access control policies enforced on a mobile device. Incorporating contextual factors into access control decisions requires that one be able to trust the information provided by these context sensors, which in turn requires that the underlying operating system and hardware be well protected against attacks from malicious adversaries. In this work, we explore how contextual factors can be leveraged to infer the safety of a context. We use a context profiling technique to gradually learn a context's profile, infer its familiarity and safety, and then use this information in the enforcement of contextual access policies. While intuitive security configurations may be suitable for non-critical applications, security-critical applications require a more rigorous definition and enforcement of contextual policies. We thus propose a formal model for proximity that allows one to define whether two users are in proximity in a given context, and we extend the traditional RBAC model by incorporating these proximity constraints. Trusted enforcement of contextual access control requires that the underlying platform be secured against attacks such as code reuse attacks. To mitigate these attacks, we propose a binary diversification approach that randomizes the target executable with every run. We also propose a defense framework based on control flow analysis that detects, diagnoses, and responds to code reuse attacks in real time.
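    The proximity-constrained extension of RBAC can be sketched as a permission check that additionally requires a user with a designated role to be detected in the same context. The sketch below is a toy model with hypothetical role and permission names, not the formal model proposed in this work.

```python
from dataclasses import dataclass, field

@dataclass
class ProximityRBAC:
    """Toy RBAC model in which some permissions also require that another
    user holding a given role is sensed nearby (e.g. via Bluetooth scans)."""
    role_permissions: dict = field(default_factory=dict)    # role -> {permissions}
    proximity_required: dict = field(default_factory=dict)  # permission -> required nearby role

    def check(self, user_roles, permission, nearby_roles):
        # Standard RBAC check: some role of the user grants the permission.
        if not any(permission in self.role_permissions.get(r, set()) for r in user_roles):
            return False
        # Proximity constraint: a user with the required role must be nearby.
        needed = self.proximity_required.get(permission)
        return needed is None or needed in nearby_roles

# Example policy: a nurse may open a record only when a doctor is in proximity.
policy = ProximityRBAC(
    role_permissions={"nurse": {"read_record"}, "doctor": {"read_record"}},
    proximity_required={"read_record": "doctor"},
)
print(policy.check({"nurse"}, "read_record", nearby_roles={"doctor"}))  # True
print(policy.check({"nurse"}, "read_record", nearby_roles=set()))       # False
```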